
Conversation

@torresmateo
Collaborator

@torresmateo torresmateo commented Dec 10, 2025

Preview here: https://docs-git-mateo-dev-25-write-connecting-arcade-a84eaa-arcade-ai.vercel.app/en/guides/agent-frameworks/setup-arcade-with-your-llm-python


Note

Introduces a concise Python guide for integrating Arcade tools with an LLM.

  • New guides/agent-frameworks/setup-arcade-with-your-llm-python/page.mdx with step-by-step setup: project init, env vars, retrieving formatted tool definitions, auth+execute helper, multi-turn ReAct-style loop (invoke_llm), and interactive chat() example using OpenRouter
  • Adds the page to agent-frameworks navigation via _meta.tsx
  • Updates public/llms.txt to index/link the new "Connect Arcade to your LLM" doc

Written by Cursor Bugbot for commit 5f501dd. This will update automatically on new commits.
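
As a rough orientation for readers of this thread, here is a minimal sketch of the OpenRouter setup the guide's `chat()` example builds on. It assumes the standard `openai` Python SDK pointed at OpenRouter's OpenAI-compatible endpoint; the environment variable name and model id are illustrative assumptions, not values taken from the guide itself.

```python
import os

from openai import OpenAI

# OpenRouter exposes an OpenAI-compatible API, so the openai SDK works
# once the base URL is overridden. The env var name and model id below
# are placeholders for illustration only.
client = OpenAI(
    base_url="https://openrouter.ai/api/v1",
    api_key=os.environ["OPENROUTER_API_KEY"],
)

response = client.chat.completions.create(
    model="openai/gpt-4o-mini",  # any OpenRouter model id works here
    messages=[{"role": "user", "content": "Say hello in one sentence."}],
)
print(response.choices[0].message.content)
```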

@vercel

vercel bot commented Dec 10, 2025

The latest updates on your projects.

| Project | Deployment | Review | Updated (UTC) |
| ------- | ---------- | ------ | ------------- |
| docs | Ready | Preview, Comment | Jan 19, 2026 8:38pm |


"content": tool_result,
})

continue

Bug: Missing assistant message before tool results in history

When the LLM returns tool calls, the code appends tool result messages to history but never appends the assistant message that contained the tool_calls. The OpenAI API (and compatible APIs like OpenRouter) requires the assistant message with tool_calls to appear in the conversation history before the corresponding tool result messages. This will cause an API error on the next iteration of the loop when the malformed history is sent back to the model.

Additional Locations (1)
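
A minimal sketch of one way to address this, assuming the guide's loop uses the `openai` SDK's chat completions objects; `history`, `message`, and `execute_tool` are placeholder names, not the guide's actual identifiers.

```python
def append_tool_turn(history: list[dict], message, execute_tool) -> None:
    """Record one tool-calling turn in the chat history.

    `message` is the assistant message returned by the chat completions API;
    `execute_tool` is whatever helper runs an Arcade tool call and returns
    its result as a string (both names here are placeholders).
    """
    # The assistant message carrying the tool_calls must go into the history
    # *before* the corresponding tool results, otherwise OpenAI-compatible
    # APIs (including OpenRouter) reject the conversation on the next request.
    history.append({
        "role": "assistant",
        "content": message.content,
        "tool_calls": [tc.model_dump() for tc in message.tool_calls],
    })
    for tool_call in message.tool_calls:
        history.append({
            "role": "tool",
            "tool_call_id": tool_call.id,
            "content": execute_tool(tool_call),
        })
```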


Contributor

@nearestnabors nearestnabors left a comment


I need more OpenRouter tokens to actually test that it works, but wanted to share what I have so far!

torresmateo and others added 2 commits December 16, 2025 14:58
Co-authored-by: RL "Nearest" Nabors <[email protected]>
Co-authored-by: RL "Nearest" Nabors <[email protected]>

```python
# Print the latest assistant response
assistant_response = history[-1]["content"]
print(f"\n🤖 Assistant: {assistant_response}\n")
```

Bug: Tool result shown as assistant response when max_turns exhausted

When invoke_llm exhausts max_turns while the assistant is still making tool calls, the function returns with a tool response as the last history item. The chat() function then accesses history[-1]["content"] and prints it prefixed with "🤖 Assistant:", displaying raw tool output as if it were the assistant's response. This produces confusing output when many consecutive tool calls are needed.

Additional Locations (1)
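
One way to guard against this, sketched under the assumption that `invoke_llm` returns the message history as a list of dicts; the helper name below is illustrative, not part of the guide.

```python
def latest_assistant_text(history: list[dict]) -> str:
    """Return the content of the most recent assistant message, if any.

    Walking backwards avoids printing a raw tool result as the assistant's
    reply when the loop stops early (for example, when max_turns is hit).
    """
    for message in reversed(history):
        if message.get("role") == "assistant" and message.get("content"):
            return message["content"]
    return "(The model stopped before producing a final response.)"


# In chat(), instead of history[-1]["content"]:
# print(f"\n🤖 Assistant: {latest_assistant_text(history)}\n")
```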


Contributor

@nearestnabors nearestnabors left a comment


Still getting the same malformed directory structure after running the commands in step 1:
(Two screenshots attached showing the resulting directory structure, taken Dec 17, 2025.)

@torresmateo torresmateo enabled auto-merge (squash) December 19, 2025 16:44
@torresmateo torresmateo disabled auto-merge December 19, 2025 17:10
@torresmateo
Collaborator Author

holding the merge until nav is merged
FYI @nearestnabors


```python
# Print the latest assistant response
assistant_response = history[-1]["content"]
print(f"\n🤖 Assistant: {assistant_response}\n")
```

Max turns exceeded causes tool result to be printed as response

Medium Severity

The chat() function assumes history[-1] is always an assistant message and prints its content as the assistant's response. However, when invoke_llm exits because max_turns is exceeded while still processing tool calls, history[-1] is a tool message with role: "tool". This causes the raw tool result (likely a JSON string) to be displayed to the user as "🤖 Assistant:" output, creating confusing behavior.

🔬 Verification Test

Why verification test was not possible: This edge case requires the LLM to continuously make tool calls until the max_turns limit (5) is reached without providing a final response. This is difficult to trigger reliably in testing as it depends on LLM behavior and requires actual API credentials. The logic flaw is evident from code inspection: when the while loop exits due to turns >= max_turns during tool processing, no assistant message is appended, yet chat() unconditionally treats history[-1]["content"] as the assistant response.

Additional Locations (1)



@cursor cursor bot left a comment


Cursor Bugbot has reviewed your changes and found 2 potential issues.

Bugbot Autofix is OFF. To automatically fix reported issues with Cloud Agents, enable Autofix in the Cursor dashboard.

```typescript
// Handle all interrupts
const decisions: any[] = [];
for (const interrupt of interrupts) {
  decisions.push(await handleAuthInterrupt(interrupt, rl));
```

Function called with extra parameter it doesn't accept

Medium Severity

The handleAuthInterrupt function is defined with a single parameter (interrupt: Interrupt) at lines 297-321, but it's called with two parameters handleAuthInterrupt(interrupt, rl) at lines 414 and 718. The rl parameter (readline interface) is passed but never used, suggesting missing functionality in the function definition.

Additional Locations (1)


```typescript
useEffect(() => {
  if (!posthog) {
    return;
  }
```


PostHog initialization removed but components still use it

High Severity

The app/_components/posthog.tsx file containing posthog.init() and PostHogProvider is completely deleted, but multiple components still import and use posthog directly from posthog-js. Without initialization, posthog.capture() calls will silently fail (no analytics), and posthog.onFeatureFlags() and posthog.getSurveys() in EarlyAccessRegistrySurvey will not work, breaking the survey functionality entirely.


Contributor

@github-actions github-actions bot left a comment


Style Review

Found 12 style suggestions.

Powered by Vale + Claude


### Write a helper function that handles the LLM's invocation

There are many orchestration patterns that can be used to handle the LLM invocation. A common pattern is a ReAct architecture, where the user prompt will result in a loop of messages between the LLM and the tools, until the LLM provides a final response (no tool calls). This is the pattern we will implement in this example.
Contributor


write-good.ThereIs: Removed 'There are' and changed 'we will' to 'you will'

Suggested change
There are many orchestration patterns that can be used to handle the LLM invocation. A common pattern is a ReAct architecture, where the user prompt will result in a loop of messages between the LLM and the tools, until the LLM provides a final response (no tool calls). This is the pattern we will implement in this example.
Many orchestration patterns handle the LLM invocation. A common pattern is a ReAct architecture, where the user prompt will result in a loop of messages between the LLM and the tools, until the LLM provides a final response (no tool calls). This is the pattern you will implement in this example.
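
For context, a compact sketch of the ReAct-style loop this paragraph describes, written against the `openai` SDK. The `get`/`execute_tool` helper, the parameter names, and the `max_turns` default are assumptions standing in for the guide's actual `invoke_llm` implementation.

```python
def invoke_llm(client, model, history, tools, execute_tool, max_turns=5):
    """Loop between the model and tools until a final answer (ReAct style)."""
    for _ in range(max_turns):
        completion = client.chat.completions.create(
            model=model, messages=history, tools=tools
        )
        message = completion.choices[0].message
        if not message.tool_calls:
            # No tool calls left: this is the final assistant response.
            history.append({"role": "assistant", "content": message.content})
            return history
        # Record the tool-calling assistant turn, then every tool result.
        history.append({
            "role": "assistant",
            "content": message.content,
            "tool_calls": [tc.model_dump() for tc in message.tool_calls],
        })
        for tool_call in message.tool_calls:
            history.append({
                "role": "tool",
                "tool_call_id": tool_call.id,
                "content": execute_tool(tool_call),
            })
    return history  # max_turns exhausted while the model was still calling tools
```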

Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Contributor

@github-actions github-actions bot left a comment


Style Review

Found 1 style suggestion.

Powered by Vale + Claude


### Write a helper function that handles the LLM's invocation

There are many orchestration patterns that can be used to handle the LLM invocation. A common pattern is a ReAct architecture, where the user prompt will result in a loop of messages between the LLM and the tools, until the LLM provides a final response (no tool calls). This is the pattern we will implement in this example.
Contributor


write-good.ThereIs: Removed 'There are' sentence starter and changed 'we will implement' to 'you will implement' per style guide

Suggested change
There are many orchestration patterns that can be used to handle the LLM invocation. A common pattern is a ReAct architecture, where the user prompt will result in a loop of messages between the LLM and the tools, until the LLM provides a final response (no tool calls). This is the pattern we will implement in this example.
Many orchestration patterns can handle the LLM invocation. A common pattern is a ReAct architecture, where the user prompt will result in a loop of messages between the LLM and the tools, until the LLM provides a final response (no tool calls). This is the pattern you will implement in this example.

@torresmateo torresmateo merged commit 585fdb3 into main Jan 19, 2026
7 checks passed
@torresmateo torresmateo deleted the mateo/dev-25-write-connecting-arcade-tools-to-your-llm-page branch January 19, 2026 20:46
